Overview of Model Evaluation
The Evaluation section provides insight into the performance of fine-tuned AI models on the platform, helping users assess the results of their fine-tuning tasks and make better decisions during model development.
Key Features:
Fine-Tuning Evaluation Graph:
- The graph visually tracks the progress of the fine-tuning process, with key metrics displayed:
- Green Line: Tracks successful fine-tuning evaluations, showing that models are being fine-tuned as expected.
- Red Line: Tracks errors or failed evaluations during the fine-tuning process.
- X-Axis: The timeline over which evaluations occurred.
- Y-Axis: Represents the count of successful or failed evaluations.
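As a sketch of the data behind this graph: evaluation records with a status field can be aggregated into the cumulative success and failure series that the green and red lines plot. The record shape and the `"succeeded"` status value here are assumptions for illustration, not the platform's actual API.

```python
# Aggregate hypothetical evaluation records into the cumulative
# success/failure counts plotted by the green and red lines.
# The record format ({"status": ...}) is an assumption for illustration.

def cumulative_series(records):
    """Return (successes, failures) as running counts per evaluation."""
    successes, failures = [], []
    ok = err = 0
    for record in records:
        if record["status"] == "succeeded":
            ok += 1
        else:
            err += 1
        successes.append(ok)
        failures.append(err)
    return successes, failures

records = [
    {"status": "succeeded"},
    {"status": "failed"},
    {"status": "succeeded"},
    {"status": "succeeded"},
]
green, red = cumulative_series(records)
print(green)  # [1, 1, 2, 3]
print(red)    # [0, 1, 1, 1]
```

Plotting `green` and `red` against the evaluation index reproduces the shape of the two lines described above.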
Model Status Table:
- Below the graph, a detailed table presents information about each fine-tuning task, including:
- Model: Name of the fine-tuned model.
- Created At: Timestamp when fine-tuning started.
- Finished At: Timestamp when fine-tuning was completed.
- Fine-Tuned Model: Name of the resulting model after fine-tuning.
- Status: Indicates whether the fine-tuning task succeeded or failed.
- Error: Displays error details if the task failed.
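A minimal in-memory mirror of one status-table row can make these columns concrete. The field names below follow the columns listed above, but the class and the `"failed"` status value are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FineTuningTask:
    """One row of the status table (illustrative field names)."""
    model: str                        # Model: name of the base model
    created_at: str                   # Created At: when fine-tuning started
    finished_at: Optional[str]        # Finished At: when it completed, if it did
    fine_tuned_model: Optional[str]   # Fine-Tuned Model: resulting model name
    status: str                       # Status: e.g. "succeeded" or "failed"
    error: Optional[str] = None       # Error: details if the task failed

def failed_tasks(tasks: List[FineTuningTask]) -> List[FineTuningTask]:
    """Return only the rows whose fine-tuning task failed."""
    return [t for t in tasks if t.status == "failed"]

tasks = [
    FineTuningTask("base-model", "2024-01-01T10:00", "2024-01-01T11:00",
                   "base-model-ft", "succeeded"),
    FineTuningTask("base-model", "2024-01-02T10:00", "2024-01-02T10:05",
                   None, "failed", error="invalid training file"),
]
print([t.error for t in failed_tasks(tasks)])  # ['invalid training file']
```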
Best Practices for Using the Evaluation Section:
Monitor Fine-Tuning Progress:
- Use the green line to track successful evaluations and confirm that models are being fine-tuned as expected.
- Compare the green line with the red line to identify patterns of errors that may require attention.
Analyze Performance Metrics:
- Visualize the success and failure trends over time to better understand how well the models are performing during fine-tuning.
- Adjust fine-tuning parameters based on these insights to optimize future results.
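One way to turn these trends into a single number is a rolling success rate over the most recent evaluations. A sketch, assuming each evaluation reports a boolean outcome (this helper is hypothetical, not a platform feature):

```python
def rolling_success_rate(outcomes, window=5):
    """Success rate (0.0-1.0) over the last `window` outcomes.

    `outcomes` is a list of booleans, True for a successful evaluation.
    """
    recent = outcomes[-window:]
    if not recent:
        return 0.0
    return sum(recent) / len(recent)

# True = successful evaluation, False = failed evaluation
outcomes = [True, True, False, True, True, False, True]
print(rolling_success_rate(outcomes, window=5))  # 0.6
```

A declining rate over recent evaluations is a signal to revisit fine-tuning parameters before launching further tasks.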
-
Investigate Fine-Tuning Details:
- The status table helps you track the start and end times of each fine-tuning task. You can also review any failed tasks by examining the error messages.
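The start and end times in the table also let you compute how long each task ran, which is useful when comparing runs. A minimal sketch, assuming the timestamps are ISO-8601 strings (the platform may display them in another format):

```python
from datetime import datetime

def task_duration_minutes(created_at: str, finished_at: str) -> float:
    """Minutes between a task's Created At and Finished At timestamps."""
    start = datetime.fromisoformat(created_at)
    end = datetime.fromisoformat(finished_at)
    return (end - start).total_seconds() / 60

print(task_duration_minutes("2024-01-01T10:00:00", "2024-01-01T11:30:00"))  # 90.0
```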
Example Screenshot:
The example screenshot below shows the Fine-Tuning Evaluation Graph, where the rising green line indicates successful evaluations, while the red line tracks errors. Beneath the graph, the status table provides key details about the models and their fine-tuning progress.
By using the Evaluation section, users can track the performance of their model fine-tuning in real time and make data-driven adjustments to improve future outcomes, supporting more efficient model optimization and development.